Pride and Perjury: A Computational Characterization of Multiagent Games with Fallacious Rewards

Authors

  • Ariel D. Procaccia
  • Jeffrey S. Rosenschein
Abstract

Agents engaged in noncooperative interplay often seek convergence to Nash equilibrium; this requires that agents be aware of others’ rewards. Misinformation about rewards leads to a gap between the real interaction model — the explicit game — and the game which the agents perceive — the implicit game. We identify two possible sources of fallacious rewards: pride and perjury. If estimation of rewards is based on modeling, a proud agent is likely to err. We define a robust equilibrium, which is impervious to slight perturbations, and prove that one can be efficiently pinpointed. We relax this concept by introducing persistent equilibrium pairs — pairs of equilibria of the explicit and implicit games with nearly identical rewards — and resolve associated complexity questions. Supposing agents simply reveal their valuations for different outcomes of the game, perjuring agents may report false rewards in order to improve their payoff. We define the Game-Manipulation (GM) decision problem, and fully characterize the complexity of this problem and some variations.
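A small bimatrix example makes the gap between the explicit game and the implicit game concrete. The Python sketch below is purely illustrative and is not taken from the paper; the payoff matrices, the 0.05 misestimate, and the restriction to pure strategies are assumptions. It shows how a slight error in perceived rewards removes a pure equilibrium held by only a narrow best-response margin, while an equilibrium with large margins survives, which is the intuition behind a robust equilibrium.

```python
# Illustrative sketch (not from the paper): a 2x2 bimatrix game in which a
# small error in perceived rewards makes the pure Nash equilibria of the
# implicit game differ from those of the explicit game.
import itertools

def pure_nash(R1, R2):
    """Pure Nash equilibria of a bimatrix game (R1: row payoffs, R2: column payoffs)."""
    n, m = len(R1), len(R1[0])
    return [(i, j)
            for i, j in itertools.product(range(n), range(m))
            if all(R1[i][j] >= R1[k][j] for k in range(n))    # row player cannot deviate profitably
            and all(R2[i][j] >= R2[i][l] for l in range(m))]  # column player cannot deviate profitably

# Explicit game: two pure equilibria; (0, 0) is held by a best-response margin of only 0.02.
R1 = [[1.00, 0.00],
      [0.98, 2.00]]
R2 = [[1.00, 0.00],
      [0.00, 2.00]]
print(pure_nash(R1, R2))   # [(0, 0), (1, 1)]

# Implicit game: the row player overestimates one of its rewards by 0.05.
P1 = [[1.00, 0.00],
      [1.03, 2.00]]
print(pure_nash(P1, R2))   # [(1, 1)]: the fragile equilibrium (0, 0) disappears
```

Here (1, 1), whose best-response margins are large, survives the perturbation while (0, 0) does not; robustness in the paper's sense asks, roughly, that an equilibrium be impervious to all sufficiently small perturbations of this kind.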

Similar resources

Investigating the Effect of Rewards on Individual Players' Efforts: A Behavioral Approach

The main goal of the study is to examine the effect of rewards on the behavior of players in a team activity. In this framework, the effect of rewards on players' behavior is examined by running 12 sequential and simultaneous games in a laboratory environment. Students from Yazd universities were surveyed, and the sample of 182 students was divided into two groups, which collected a total of 2184 ma...

Full text

Multiagent Q-Learning: Preliminary Study on Dominance between the Nash and Stackelberg Equilibriums

Some game-theoretic approaches to multiagent reinforcement learning in self-play, i.e., when the agents use the same algorithm to choose their actions, employ equilibria, such as the Nash equilibrium, to compute the agents' policies. These approaches have been applied only to simple examples. In this paper, we present an extended version of Nash Q-Learning using the Stackelberg equilibrium to...

Full text
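As a minimal point of reference for the Nash-versus-Stackelberg comparison above (this is not the Nash Q-Learning extension from the cited paper; the payoff matrices, the pure-strategy restriction, and the tie-breaking rule are assumptions), the sketch below computes both solution concepts for a 2x2 bimatrix game in which commitment gives the row player, acting as leader, a strictly higher payoff than the unique pure Nash equilibrium.

```python
# Minimal sketch (not from the cited paper): compare the pure Nash equilibria
# of a bimatrix game with its Stackelberg outcome, where the row player commits
# first and the column player best-responds.
import itertools

R1 = [[2, 4],   # row-player (leader) payoffs
      [1, 3]]
R2 = [[1, 0],   # column-player (follower) payoffs
      [0, 2]]

def pure_nash(R1, R2):
    """Pure Nash equilibria: no player can gain by a unilateral deviation."""
    n, m = len(R1), len(R1[0])
    return [(i, j) for i, j in itertools.product(range(n), range(m))
            if all(R1[i][j] >= R1[k][j] for k in range(n))
            and all(R2[i][j] >= R2[i][l] for l in range(m))]

def stackelberg(R1, R2):
    """Leader commits to a pure row; follower best-responds (ties broken in the leader's favour)."""
    best = None
    for i in range(len(R1)):
        j = max(range(len(R2[i])), key=lambda col: (R2[i][col], R1[i][col]))
        if best is None or R1[i][j] > R1[best[0]][best[1]]:
            best = (i, j)
    return best

print(pure_nash(R1, R2))    # [(0, 0)]: unique pure Nash equilibrium, leader payoff 2
print(stackelberg(R1, R2))  # (1, 1): Stackelberg outcome, leader payoff 3
```

In this game the leader's commitment payoff (3) strictly exceeds its Nash payoff (2), which is the kind of dominance relation between the two equilibrium concepts that the cited study examines in a Q-learning setting.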

A Multiagent Reinforcement Learning algorithm to solve the Community Detection Problem

Community detection is a challenging optimization problem that consists of searching for the communities of a network, under the assumption that nodes of the same community share properties that enable the detection of new characteristics or functional relationships in the network. Although many algorithms have been developed for community detection, most of them are unsuitable when ...

Full text
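As background for the problem described above, the following sketch runs a standard modularity-based baseline (greedy modularity maximization) on a classic benchmark graph; it is not the multiagent reinforcement learning algorithm of the cited paper, and it assumes the networkx library is installed.

```python
# Baseline sketch (not the cited paper's algorithm): modularity-based community
# detection on the Zachary karate club graph, to make the objective concrete.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

G = nx.karate_club_graph()                      # classic 34-node benchmark network
communities = greedy_modularity_communities(G)  # greedy modularity maximization
print(len(communities), "communities found")
print("modularity:", round(modularity(G, communities), 3))
```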

An Experimental Study of Incentive Reversal in Sequential and Simultaneous Games

It is commonly held that increasing monetary rewards enhances work effort. This study, however, argues that this does not inevitably occur in team activities. Incentive reversal may occur in sequential team production featuring positive external effects on agents. This seemingly paradoxical phenomenon is examined through two experiments in this article. The first experiment involves a sample ...

Full text

Argotario: Computational Argumentation Meets Serious Games

An important skill in critical thinking and argumentation is the ability to spot and recognize fallacies. Fallacious arguments, omnipresent in argumentative discourse, can be deceptive or manipulative, or can simply lead to ‘wrong moves’ in a discussion. Despite their importance, argumentation scholars and NLP researchers with a focus on argumentation quality have not yet investigated fallacies empi...

Full text

Journal title:

Volume   Issue

Pages  -

Publication date  2006